Scene graph generation (SGG) aims to extract (subject, predicate, object) triplets from images. Recent works have made steady progress on SGG and provide useful tools for high-level vision and language understanding. However, due to data distribution problems including long-tail distribution and semantic ambiguity, the predictions of current SGG models tend to collapse to a few frequent but uninformative predicates (e.g., on, at), which limits the practical application of these models in downstream tasks. To address the problems above, we propose a novel Internal and External Data Transfer (IETrans) method, which can be applied in a plug-and-play fashion and expanded to large-scale SGG with 1,807 predicate classes. IETrans tries to relieve the data distribution problem by automatically creating an enhanced dataset that provides more sufficient and coherent annotations for all predicates. By training on the enhanced dataset, a Neural Motif model doubles its macro performance while maintaining competitive micro performance. Code and data are publicly available at https://github.com/waxnkw/ietrans-sgg.pytorch.
The ability to understand and generate similes is an essential step toward realizing human-level AI. However, there is still a considerable gap between machine intelligence and human cognition in similes, since deep models based on statistical distributions tend to favour high-frequency similes. Hence, a large-scale symbolic knowledge base of similes is required, as it contributes to the modeling of diverse yet unpopular similes while facilitating additional evaluation and reasoning. To bridge the gap, we propose a novel framework for large-scale simile knowledge base construction, as well as two probabilistic metrics which enable an improved understanding of simile phenomena in natural language. Overall, we construct MAPS-KB, a million-scale probabilistic simile knowledge base, covering 4.3 million triplets over 0.4 million terms from 70 GB corpora. We conduct extensive experiments to justify the effectiveness and necessity of the methods in our framework. We also apply MAPS-KB to three downstream tasks to achieve state-of-the-art performance, further demonstrating the value of MAPS-KB.
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification. In comparison to image classification, ranking relation classification is sample efficient and domain agnostic. Besides, it provides a new perspective on few-shot learning and is complementary to state-of-the-art methods. The core component of our deep neural network is a simple MLP, which takes as input an image triplet encoded as the difference between two vector-Kronecker products, and outputs a binary relevance ranking order. The proposed RankMLP can be built on top of any state-of-the-art feature extractors, and our entire deep neural network is called the ranking deep neural network, or RankDNN. Meanwhile, RankDNN can be flexibly fused with other post-processing methods. During the meta-test, RankDNN ranks support images according to their similarity with the query samples, and each query sample is assigned the class label of its nearest neighbor. Experiments demonstrate that RankDNN can effectively improve the performance of its baselines based on a variety of backbones and it outperforms previous state-of-the-art algorithms on multiple few-shot learning benchmarks, including miniImageNet, tieredImageNet, Caltech-UCSD Birds, and CIFAR-FS. Furthermore, experiments on the cross-domain challenge demonstrate the superior transferability of RankDNN. The code is available at: https://github.com/guoqianyu-alberta/RankDNN.
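A minimal sketch of the ranking-relation idea described in this abstract, assuming generic PyTorch modules (the class name and dimensions are illustrative, not the authors' code): the query-support triplet is encoded as the difference of two vector-Kronecker products and a small MLP predicts the binary ranking order.

```python
import torch
import torch.nn as nn

class RankMLPSketch(nn.Module):
    """Hedged sketch: MLP on the difference of two vector-Kronecker products."""
    def __init__(self, feat_dim: int, hidden_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),  # binary ranking order (logit)
        )

    def forward(self, q, s1, s2):
        # q, s1, s2: (B, feat_dim) features from any pretrained backbone.
        k1 = torch.einsum("bi,bj->bij", q, s1).flatten(1)  # Kronecker product q ⊗ s1
        k2 = torch.einsum("bi,bj->bij", q, s2).flatten(1)  # Kronecker product q ⊗ s2
        return self.mlp(k1 - k2)  # >0 means s1 is ranked above s2 for query q

# toy usage with random backbone features
model = RankMLPSketch(feat_dim=64)
logit = model(torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 64))
```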
Face anti-spoofing (FAS) approaches based on unsupervised domain adaptation (UDA) have drawn growing attention due to their promising performance in target scenarios. Most existing UDA FAS methods typically fit the trained models to the target domain by aligning the distributions of semantic high-level features. However, insufficient supervision of the unlabeled target domain and neglect of low-level feature alignment degrade the performance of existing methods. To address these issues, we propose a novel perspective on UDA FAS that directly fits the target data to the models, i.e., stylizes the target data to the source-domain style via image translation, and further feeds the stylized data into the well-trained source model for classification. The proposed Generative Domain Adaptation (GDA) framework combines two carefully designed consistency constraints: 1) inter-domain neural statistic consistency guides the generator to narrow the inter-domain gap; 2) dual-level semantic consistency ensures the semantic quality of the stylized images. In addition, we propose intra-domain spectrum mixup to further expand the target data distribution, ensuring generalization and reducing the intra-domain gap. Extensive experiments and visualizations demonstrate the effectiveness of our method against the state-of-the-art methods.
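One plausible reading of the intra-domain spectrum mixup step, written as a hedged PyTorch sketch: mix the Fourier amplitude spectra of two target-domain images while keeping one image's phase, producing new target samples with expanded low-level statistics. The function name and the amplitude-mixing recipe are assumptions; the paper's exact formulation may differ.

```python
import torch

def spectrum_mixup(x1: torch.Tensor, x2: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Assumed recipe: blend amplitude spectra of two same-domain images, keep x1's phase."""
    # x1, x2: (C, H, W) images from the same (target) domain
    f1 = torch.fft.fft2(x1)
    f2 = torch.fft.fft2(x2)
    amp = lam * f1.abs() + (1.0 - lam) * f2.abs()      # mixed amplitude spectrum
    mixed = torch.polar(amp, torch.angle(f1))           # recombine with x1's phase
    return torch.fft.ifft2(mixed).real

x_a, x_b = torch.rand(3, 256, 256), torch.rand(3, 256, 256)
x_mix = spectrum_mixup(x_a, x_b, lam=0.7)
```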
With various face presentation attacks emerging continually, face anti-spoofing (FAS) approaches based on domain generalization (DG) have drawn growing attention. Existing DG-based FAS methods always capture domain-invariant features for generalizing to various unseen domains. However, they neglect the discriminative characteristics of individual source domains and the diverse domain-specific information of different domains, and the trained model is not adequate to adapt to various unseen domains. To address this issue, we propose an Adaptive Mixture of Experts Learning (AMEL) framework, which exploits domain-specific information to adaptively establish the link between the seen source domains and unseen target domains, further improving generalization. Specifically, Domain-Specific Experts (DSE) are designed to investigate discriminative and unique domain-specific features as a complement to the common domain-invariant features. Moreover, Dynamic Expert Aggregation (DEA) is proposed to adaptively aggregate the complementary information of each source expert based on the domain relevance to the unseen target domain. Combined with meta-learning, these modules work collaboratively to adaptively aggregate meaningful domain-specific information for various unseen target domains. Extensive experiments and visualizations demonstrate the effectiveness of our method against state-of-the-art competitors.
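A schematic sketch of the experts-plus-aggregation design, under the assumption that aggregation reduces to softmax-gated weighting of per-domain expert outputs; module names and sizes are illustrative rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfDomainExperts(nn.Module):
    """Hedged sketch: one expert head per source domain, relevance-weighted aggregation."""
    def __init__(self, feat_dim: int, num_domains: int):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(feat_dim, feat_dim) for _ in range(num_domains))
        self.gate = nn.Linear(feat_dim, num_domains)  # estimates domain relevance per sample

    def forward(self, shared_feat):
        # shared_feat: (B, feat_dim) domain-invariant features from a common backbone
        weights = F.softmax(self.gate(shared_feat), dim=-1)                       # (B, D)
        expert_out = torch.stack([e(shared_feat) for e in self.experts], dim=1)   # (B, D, feat)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)                    # adaptive aggregation

fused = MixtureOfDomainExperts(feat_dim=256, num_domains=3)(torch.randn(8, 256))
```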
Video salient object detection models trained on pixel-wise dense annotations have achieved excellent performance, but obtaining pixel-by-pixel annotated datasets is laborious. A few works attempt to use scribble annotations to alleviate this problem, but point supervision, an even more labor-saving annotation method (indeed the most labor-saving among manual annotation methods for dense prediction), has not yet been explored. In this paper, we propose a strong baseline model based on point supervision. To infer saliency maps with temporal information, we mine inter-frame complementary information from short-term and long-term perspectives, respectively. Specifically, we propose a hybrid token attention module, which mixes optical flow and image information along orthogonal directions, adaptively highlighting critical optical flow information (channel dimension) and critical token information (spatial dimension). To exploit long-term cues, we develop a long-term cross-frame attention module (LCFA), which helps the current frame infer salient objects based on multi-frame tokens. Furthermore, we label two point-supervised datasets, P-DAVIS and P-DAVSOD, by relabeling the DAVIS and DAVSOD datasets. Experiments on six benchmark datasets illustrate that our method outperforms previous state-of-the-art weakly supervised methods and is even comparable with some fully supervised methods. The source code and datasets are available.
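A rough, assumption-laden sketch of how such a hybrid attention could be wired: image tokens gate the optical-flow features along the channel dimension, while flow tokens gate the image features along the spatial (token) dimension, and the two branches are fused. This is an illustration of the described idea, not the paper's module.

```python
import torch
import torch.nn as nn

class HybridTokenAttentionSketch(nn.Module):
    """Hedged sketch: channel gating of flow features, spatial gating of image tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.channel_gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.token_gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, img_tokens, flow_tokens):
        # img_tokens, flow_tokens: (B, N, dim) token sequences from the two modalities
        chan_w = self.channel_gate(img_tokens.mean(dim=1, keepdim=True))  # (B, 1, dim)
        flow_refined = flow_tokens * chan_w          # highlight critical flow channels
        tok_w = self.token_gate(flow_tokens)         # (B, N, 1)
        img_refined = img_tokens * tok_w             # highlight critical image tokens
        return img_refined + flow_refined            # simple fusion of the two branches

out = HybridTokenAttentionSketch(dim=128)(torch.randn(2, 196, 128), torch.randn(2, 196, 128))
```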
Knowledge embedding (KE) represents a knowledge graph (KG) by embedding entities and relations into continuous vector spaces. Existing methods are mainly structure-based or description-based. Structure-based methods learn representations that preserve the inherent structure of KGs, but they cannot well represent the abundant long-tail entities in real-world KGs, which have limited structural information. Description-based methods leverage textual information and language models. Prior approaches in this direction barely outperform structure-based ones and suffer from problems such as expensive negative sampling and restrictive description demand. In this paper, we propose LMKE, which adopts language models to derive knowledge embeddings, aiming both to enrich the representations of long-tail entities and to solve the problems of prior description-based methods. We formulate description-based KE learning within a contrastive learning framework to improve the efficiency of training and evaluation. Experimental results show that LMKE achieves state-of-the-art performance on KE benchmarks of link prediction and triple classification, especially for long-tail entities.
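A hedged sketch of the general description-based, contrastive KE recipe (pretrained LM encoder, in-batch negatives, InfoNCE-style loss); the model choice, prompt format, and temperature are assumptions, not LMKE's actual architecture.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Assumed encoder; LMKE's choice of LM and input formatting may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    """Encode textual descriptions and return L2-normalized [CLS] embeddings."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = encoder(**batch).last_hidden_state[:, 0]
    return F.normalize(out, dim=-1)

queries = encode(["Barack Obama [SEP] born in"])       # head entity + relation description
tails = encode(["Honolulu", "Chicago", "New York"])    # candidate tail descriptions
logits = queries @ tails.t() / 0.05                    # similarity with a temperature
loss = F.cross_entropy(logits, torch.tensor([0]))      # correct tail at index 0, in-batch negatives
```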
Current state-of-the-art saliency detection models rely heavily on large datasets with accurate pixel-wise annotations, but manually labeling pixels is time-consuming and labor-intensive. Some weakly supervised methods have been developed to alleviate this problem, such as image-label, bounding-box, and scribble supervision, while point labels remain unexplored in this field. In this paper, we propose a novel weakly supervised salient object detection method using point supervision. To infer the saliency map, we first design an adaptive masked flood-filling algorithm to generate pseudo labels. We then develop a transformer-based point-supervised saliency detection model to produce the first round of saliency maps. However, due to the sparsity of the labels, the weakly supervised model tends to degenerate into a general foreground detection model. To address this issue, we propose a Non-Salient Suppression (NSS) method to optimize the erroneous saliency maps generated in the first round and use them for the second round of training. Moreover, we build a new point-supervised dataset (P-DUTS) by relabeling the DUTS dataset. In P-DUTS, each salient object has only one labeled point. Comprehensive experiments on the five largest benchmark datasets demonstrate that our method outperforms previous state-of-the-art methods trained with stronger supervision and even surpasses several fully supervised state-of-the-art models. The code is available at: https://github.com/shuyonggao/psod.
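A minimal sketch of turning a single labeled point into a pseudo mask via flood filling, assuming OpenCV's floodFill as the region-growing mechanism; the paper's adaptive masked flood-filling algorithm is more elaborate, and the tolerance and example below are purely illustrative.

```python
import cv2
import numpy as np

def point_to_pseudo_mask(image: np.ndarray, point: tuple, tol: int = 10) -> np.ndarray:
    """Grow a pseudo label from one annotated point under a color tolerance (sketch)."""
    h, w = image.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)          # floodFill needs a 2px-larger mask
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)   # write 255 into the mask only
    cv2.floodFill(image, mask, point, 0, (tol, tol, tol), (tol, tol, tol), flags)
    return mask[1:-1, 1:-1]                            # pseudo label for the salient object

# synthetic example: a dark "object" on a bright background, point inside the object
img = np.full((240, 320, 3), 230, np.uint8)
cv2.rectangle(img, (100, 80), (220, 180), (40, 40, 40), -1)
pseudo = point_to_pseudo_mask(img, point=(160, 130))
```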
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance on par with previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current-frame detection results. These designs boost the strong Deformable DETR baseline by a significant margin (2%-4% mAP) on the ImageNet VID dataset, and TransVOD yields comparable performance on the ImageNet VID benchmark. We then present two improved versions of TransVOD, TransVOD++ and TransVOD Lite. The former fuses object-level information into the object query via dynamic convolution, while the latter models the entire video clip as the output to speed up inference. We give a detailed analysis of all three models in the experiment section. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU. Code and models will be available for further research.
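A simplified sketch of temporal aggregation over per-frame object queries, in the spirit of the Temporal Query Encoder; the fusion scheme, dimensions, and class name are assumptions, and the real TransVOD modules are considerably more involved.

```python
import torch
import torch.nn as nn

class TemporalQueryAggregator(nn.Module):
    """Hedged sketch: let object queries from all frames attend to each other."""
    def __init__(self, dim: int = 256, num_layers: int = 2, num_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, frame_queries):
        # frame_queries: (B, T, Q, dim) object queries from a DETR-style detector per frame
        b, t, q, d = frame_queries.shape
        fused = self.encoder(frame_queries.reshape(b, t * q, d))  # attend across frames
        return fused.reshape(b, t, q, d)[:, -1]  # refined queries for the current (last) frame

queries = torch.randn(2, 4, 100, 256)   # 2 clips, 4 frames, 100 queries each
current = TemporalQueryAggregator()(queries)
```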
Unsupervised domain adaptation (UDA) for semantic segmentation has been well studied in recent years. However, most existing works largely neglect local region consistency across different domains and are less robust to changes in outdoor environments. In this paper, we propose a novel, fully end-to-end trainable approach called region contrastive consistency (RCCR) for domain adaptive semantic segmentation. Our core idea is to pull similar regional features extracted from the same location of two images, i.e., the original image and its augmented version, closer together, while pushing features from different locations of the two images apart. We propose a region-wise contrastive loss with two sampling strategies to realize effective regional consistency. Moreover, we present a momentum projection head, in which the teacher projection head is the exponential moving average of the student's. Finally, a memory bank mechanism is designed to learn more robust and stable regional features under different environments. Extensive experiments on two common UDA benchmarks, i.e., GTAV to Cityscapes and SYNTHIA to Cityscapes, demonstrate that our approach outperforms the state-of-the-art methods.
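A small sketch of the momentum projection head mentioned above: the teacher head receives no gradients and is instead updated as an exponential moving average (EMA) of the student head's weights. The projection architecture and momentum value here are illustrative, not the paper's exact configuration.

```python
import copy
import torch
import torch.nn as nn

# Assumed projection head; RCCR's actual head and feature dimensions may differ.
student_head = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 128))
teacher_head = copy.deepcopy(student_head)
for p in teacher_head.parameters():
    p.requires_grad_(False)   # teacher is never trained by backpropagation

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.999):
    """Move each teacher parameter toward the corresponding student parameter."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

# called after every optimizer step on the student
ema_update(teacher_head, student_head)
```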